Results 1 - 20 of 45
1.
Curr Pharm Des ; 29(34): 2738-2751, 2023.
Article in English | MEDLINE | ID: mdl-37916622

ABSTRACT

INTRODUCTION: Dose reconstruction based on linear accelerator (linac) log-files is a widely used approach to patient-specific quality assurance (QA). However, it has the drawback that log-file accuracy is highly dependent on linac calibration. The objective of the current study is to present a new practical approach to patient-specific QA for volumetric modulated arc therapy (VMAT) that uses both the log-file and the calibration errors of the linac. METHODS: A total of six cases were selected: two head and neck neoplasms, two lung cancers, and two rectal carcinomas. VMAT delivery was optimized by the Pinnacle^3 TPS and performed on an Elekta Synergy VMAT linac (Elekta Oncology Systems, Crawley, UK) equipped with 80 multi-leaf collimator (MLC) leaves, with the beam energy set at 6 MV. The clinical-mode log-file of this linac was used in this study, and its accuracy was validated with a series of test fields. The six test-case plans were then delivered and the log-file of each was obtained. The log-file errors were added to the corresponding plans through an in-house script to produce the first reconstructed plan. Next, a series of tests was performed to evaluate the major calibration errors of the linac (dose rate, gantry angle, and MLC leaf position), and these errors were added to the first reconstructed plan to generate the second reconstructed plan. Finally, all plans were imported into Pinnacle and the dose distributions were recalculated on the patient CT and on an ArcCheck phantom (Sun Nuclear). For the former, the target and OAR dose differences between plans were compared; for the latter, γ was evaluated by ArcCheck and the surface dose differences were compared. RESULTS: The accuracy of the log-file was validated. When only the errors recorded in the log-file were considered, four arcs had more than 35% of control points with gantry angle errors exceeding ±1°.
For all arcs, 95% of leaf position errors were within ±0.5 mm. The differences in single-control-point MU were larger, whereas the differences in cumulative MU were smaller. The maximum, minimum, and mean dose differences for all targets were distributed between -6.79E-02 and 0.42%, -0.38 and 0.4%, and 2.69E-02 and 8.54E-02%, respectively, whereas for all OARs, the maximum and mean dose differences were distributed between -1.16 and 2.51% and -1.21 and 3.12%, respectively. For the second reconstructed dose, the maximum, minimum, and mean dose differences for all targets were distributed between 0.0995 and 5.7145%, 0.6892 and 4.4727%, and 0.5829 and 1.8931%, respectively. For the OARs, the maximum and mean dose differences were distributed between -3.1462 and 6.8920% and -6.9899 and 1.9316%, respectively. CONCLUSION: Patient-specific QA based on the log-file can reflect the accuracy of the linac's execution of the plan, which usually has a small influence on dose delivery. When the linac calibration errors were also considered, the reconstructed dose was closer to the actual delivery, and the developed method proved accurate and practical.
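The gantry-angle check described here — the fraction of control points per arc whose logged angle deviates from the planned angle by more than ±1° — can be sketched as follows. This is a minimal illustration with made-up log-file numbers, not the authors' in-house script:

```python
import numpy as np

def arc_error_fraction(planned_angles, delivered_angles, tol_deg=1.0):
    """Fraction of control points whose gantry-angle error exceeds +/- tol_deg."""
    err = np.asarray(delivered_angles) - np.asarray(planned_angles)
    return float(np.mean(np.abs(err) > tol_deg))

# Toy log-file data for one arc: 5 control points, two exceed the tolerance.
planned = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
delivered = planned + np.array([0.2, 1.5, -0.3, -2.0, 0.4])
frac = arc_error_fraction(planned, delivered)  # 2 of 5 points exceed 1 degree
```

The same pattern applies to the MLC leaf-position check with a ±0.5 mm tolerance.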


Subject(s)
Lung Neoplasms , Radiotherapy, Intensity-Modulated , Humans , Radiotherapy, Intensity-Modulated/methods , Radiotherapy Planning, Computer-Assisted/methods , Calibration , Quality Assurance, Health Care/methods
2.
Math Biosci Eng ; 20(2): 2439-2458, 2023 01.
Article in English | MEDLINE | ID: mdl-36899541

ABSTRACT

Anti-vascular endothelial growth factor (anti-VEGF) therapy has become a standard treatment for choroidal neovascularization (CNV) and cystoid macular edema (CME). However, anti-VEGF injection is a long-term, expensive therapy and may not be effective for some patients. Predicting the effectiveness of anti-VEGF injection before therapy is therefore necessary. In this study, a new optical coherence tomography (OCT) image-based self-supervised learning (OCT-SSL) model for predicting the effectiveness of anti-VEGF injection is developed. In OCT-SSL, we pre-train a deep encoder-decoder network through self-supervised learning on a public OCT image dataset to learn general features. Model fine-tuning is then performed on our own OCT dataset to learn the discriminative features that predict the effectiveness of anti-VEGF. Finally, a classifier trained on features from the fine-tuned encoder, used as a feature extractor, is built to predict the response. Experimental results on our private OCT dataset demonstrate that the proposed OCT-SSL achieves an average accuracy, area under the curve (AUC), sensitivity, and specificity of 0.93, 0.98, 0.94, and 0.91, respectively. Meanwhile, we find that not only the lesion region but also the normal region of the OCT image is related to the effectiveness of anti-VEGF.


Subject(s)
Choroidal Neovascularization , Vascular Endothelial Growth Factor A , Humans , Choroidal Neovascularization/metabolism , Choroidal Neovascularization/pathology , Sensitivity and Specificity , Supervised Machine Learning , Tomography, Optical Coherence/methods
3.
Tomography ; 8(5): 2218-2231, 2022 09 02.
Article in English | MEDLINE | ID: mdl-36136882

ABSTRACT

Interior tomography in X-ray computed tomography (CT) has many advantages, such as a lower radiation dose and lower detector hardware cost compared with traditional CT. However, this imaging technique uses only the projection data passing through the region of interest (ROI) for imaging; accordingly, the projection data are truncated at both ends of the detector, so traditional analytical reconstruction algorithms cannot satisfy the demands of clinical diagnosis. To overcome these limitations, in this paper we propose a high-quality statistical iterative reconstruction algorithm that uses the zeroth-order image moment as novel prior knowledge; the zeroth-order image moment can be estimated in the projection domain using the Helgason-Ludwig consistency condition. The L1 norm of the sparse representation, in terms of dictionary learning, and the zeroth-order image moment constraint are then incorporated into the statistical iterative reconstruction framework to construct an objective function. Finally, the objective function is minimized using an alternating minimization iterative algorithm. Experimental results on a simulated chest CT image and real CT data demonstrate that the proposed approach removes shift artifacts effectively and outperforms the total variation (TV)-based approach in removing noise and preserving fine structures.
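The zeroth-order image moment exploited here is the total attenuation mass, which the Helgason-Ludwig consistency condition says is the same for every non-truncated projection view. A toy parallel-beam check, assuming the 0° and 90° projections are simple axis sums:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # toy attenuation map

# Parallel-beam projections at 0 and 90 degrees are just axis sums.
proj_0 = image.sum(axis=0)
proj_90 = image.sum(axis=1)

# The zeroth-order moment (total mass) is identical for every view angle,
# so it can be estimated from any non-truncated projection and then used
# as a constraint when the measured projections are truncated.
mass_0 = proj_0.sum()
mass_90 = proj_90.sum()
```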


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Artifacts , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Tomography, X-Ray Computed/methods
4.
J Xray Sci Technol ; 30(4): 805-822, 2022.
Article in English | MEDLINE | ID: mdl-35599528

ABSTRACT

The X-ray tube of a computed tomography (CT) system emits a polychromatic spectrum of photons, which leads to beam-hardening artifacts such as cupping and streaks, while metal implants in the imaged object produce metal artifacts in the reconstructed images. The simultaneous presence of various beam-hardening artifacts degrades the diagnostic accuracy of CT images in the clinic, so suppressing such artifacts warrants thorough investigation. In this study, a data consistency condition is exploited to construct an objective function, and a non-convex optimization algorithm is employed to solve for the optimal scaling factors. Finally, an optimal bone correction is acquired to simultaneously correct for cupping, streaks, and metal artifacts. Experimental results from a realistic computer simulation demonstrate that the proposed method can adaptively determine the optimal scaling factors and then correct for various beam-hardening artifacts in the reconstructed CT images. In particular, compared with the nonlinear least squares before variable substitution, the running time of the new CT image reconstruction algorithm decreases by 82.36% and the residual error is reduced by 55.95%. Compared with the nonlinear least squares after variable substitution, the running time of the new algorithm decreases by 67.54% with the same residual error.
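The core idea — choosing correction factors by enforcing data consistency — can be sketched on a toy problem. The quadratic hardening model and correction form below are hypothetical stand-ins (not the paper's formulation): a grid search picks the factor that best re-equalizes the total integral of two views of the same object:

```python
import numpy as np

# Toy line integrals of the same object along two view angles
# (equal total mass, different path-length distribution).
p1 = np.array([1.0, 1.0, 1.0, 1.0])
p2 = np.array([2.0, 2.0, 0.0, 0.0])

a_true = 0.1  # hypothetical beam-hardening coefficient
q1 = p1 + a_true * p1**2  # "hardened" polychromatic measurements
q2 = p2 + a_true * p2**2

def inconsistency(a):
    """Consistency residual: corrected view sums should agree."""
    c1 = q1 - a * q1**2  # first-order correction with candidate factor a
    c2 = q2 - a * q2**2
    return abs(c1.sum() - c2.sum())

grid = np.linspace(0.0, 0.2, 41)
best_a = min(grid, key=inconsistency)
```

With no correction (a = 0) the two view sums disagree; the consistency-optimal factor removes most of that disagreement.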


Asunto(s)
Artefactos , Tomografía Computarizada por Rayos X , Algoritmos , Simulación por Computador , Procesamiento de Imagen Asistido por Computador , Fantasmas de Imagen
5.
Med Phys ; 48(10): 6421-6436, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34514608

ABSTRACT

PURPOSE: Four-dimensional cone-beam computed tomography (4D CBCT) is developed to reconstruct a sequence of phase-resolved images, which can assist in verifying the patient's position and offer information for cancer treatment planning. However, 4D CBCT images suffer from severe streaking artifacts and noise because each phase poses an extremely sparse-view CT reconstruction problem, which can cause inaccuracy in treatment estimation. The purpose of this paper is to develop a new 4D CBCT reconstruction method that generates a series of 4D CBCT images with high spatiotemporal resolution. METHODS: Considering the strength of dictionary learning (DL) in effectively representing structural features and the correlation between neighboring pixels, we construct a novel DL-based method for 4D CBCT reconstruction. In this study, both a motion-aware dictionary and a spatially structural 2D dictionary are trained for 4D CBCT by exploiting the spatiotemporal correlation among the ten phase-resolved images and the spatial information within each image, respectively. Two reconstruction models are produced. The first is the motion-aware dictionary learning-based 4D CBCT algorithm, called MaDL. The second is MaDL equipped with a prior knowledge constraint, called pMaDL. Qualitative and quantitative evaluations are performed using a 4D extended cardiac-torso (XCAT) phantom, simulated patient data, and two sets of patient data. Several state-of-the-art 4D CBCT algorithms, such as the McKinnon-Bates (MKB) algorithm, prior image constrained compressed sensing (PICCS), and the high-quality initial image-guided 4D CBCT reconstruction method (HQI-4DCBCT), are applied for comparison to validate the performance of the proposed MaDL and pMaDL reconstruction frameworks.
RESULTS: Experimental results validate that the proposed MaDL can output reconstructions with few streaking artifacts, although some structural information, such as tumors and blood vessels, may still be missed. Meanwhile, the results of the proposed pMaDL demonstrate an improved spatiotemporal resolution of the reconstructed 4D CBCT images. In these improved 4D CBCT reconstructions, streaking artifacts are largely suppressed and detailed structures are also restored. For the XCAT phantom, quantitative evaluations indicate that average decreases of 58.70%, 45.25%, and 40.10% in root-mean-square error (RMSE) and average gains of 2.10, 1.37, and 1.37 times in structural similarity index (SSIM) are achieved by the proposed pMaDL method when compared with the MKB, PICCS, and MaDL(2D) methods, respectively. Moreover, the proposed pMaDL achieves performance comparable to the HQI-4DCBCT algorithm in terms of RMSE and SSIM but suppresses streaking artifacts better. CONCLUSIONS: The proposed algorithm can reconstruct a set of 4D CBCT images with high spatiotemporal resolution and detailed feature preservation. Moreover, the proposed pMaDL can effectively suppress streaking artifacts in the resultant reconstructions while achieving an overall improved spatiotemporal resolution by incorporating the motion-aware dictionary with a prior constraint into the proposed 4D CBCT iterative framework.


Subject(s)
Cone-Beam Computed Tomography , Spiral Cone-Beam Computed Tomography , Algorithms , Four-Dimensional Computed Tomography , Humans , Image Processing, Computer-Assisted , Phantoms, Imaging
6.
Phys Med Biol ; 66(18)2021 09 16.
Article in English | MEDLINE | ID: mdl-34352735

ABSTRACT

Iterative reconstruction frameworks are advantageous in low-dose and incomplete-data situations. An iterative reconstruction framework has two components: the fidelity term, which aims to maintain the structural details of the reconstructed object, and the regularization term, which uses prior information to suppress artifacts such as noise. A regularization parameter balances the two, aiming for a good trade-off between noise and resolution. Currently, regularization parameters are selected by rule of thumb or require some prior knowledge assumption, which limits practical use; furthermore, the computational cost of regularization parameter selection is heavy. In this paper, we address this problem by introducing CT image quality assessment (IQA) into the iterative reconstruction framework. Several steps are involved in the study. First, we analyze CT image statistics using the dual dictionary learning (DDL) method; regularities are observed and summarized, revealing the relationship among the regularization parameter, the iterations, and CT image quality. Second, by deriving and simplifying the DDL procedure, we design a CT IQA metric named SODVAC, which locates the optimal regularization parameter that yields a reconstructed image with distinct structures and little or no noise. Third, we introduce SODVAC into the iterative reconstruction framework, propose a general image-quality-guided iterative reconstruction (QIR) framework, and give a specific example (sQIR), which simultaneously optimizes the reconstructed image and the regularization parameter during the iterations. Results confirm the effectiveness of the proposed method. Compared with the existing state-of-the-art L-curve and ZIP selection strategies, our method needs no prior information and has a low computational cost.
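As a simplified stand-in for quality-guided parameter selection, the sketch below sweeps the regularization parameter of a Tikhonov smoothing problem and picks the value by the classical discrepancy principle (data residual matching a known noise level). The paper's SODVAC metric replaces this rule with a no-reference quality score, so this is only an illustration of the selection loop, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
signal = np.sin(np.linspace(0, 4 * np.pi, n))
sigma = 0.3
noisy = signal + sigma * rng.normal(size=n)

# Second-difference operator used as the regularizer (smoothness prior).
D = np.diff(np.eye(n), n=2, axis=0)

def reconstruct(lam):
    # Tikhonov: argmin_x ||x - noisy||^2 + lam * ||D x||^2  (closed form)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)

# Sweep lambda; the per-sample residual grows monotonically with lambda.
lams = np.logspace(-2, 4, 25)
residuals = [np.linalg.norm(reconstruct(l) - noisy) / np.sqrt(n) for l in lams]

# Discrepancy principle: residual should match the noise level.
best = lams[int(np.argmin([abs(r - sigma) for r in residuals]))]
```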


Asunto(s)
Algoritmos , Procesamiento de Imagen Asistido por Computador , Artefactos , Fantasmas de Imagen , Tomografía Computarizada por Rayos X
7.
IEEE Trans Med Imaging ; 40(11): 3054-3064, 2021 11.
Article in English | MEDLINE | ID: mdl-34010129

ABSTRACT

Four-dimensional cone-beam computed tomography (4D CBCT) has been developed to provide a sequence of phase-resolved reconstructions in image-guided radiation therapy. However, 4D CBCT images are degraded by severe streaking artifacts and noise because phase-resolved reconstruction is an extremely sparse-view CT procedure in which only a few under-sampled projections are used to reconstruct each phase. Aiming to improve the overall quality of 4D CBCT images, we propose two CNN models, named N-Net and CycN-Net, which fully exploit the inherent properties of 4D CBCT. Specifically, the proposed N-Net incorporates the prior image reconstructed from the entire projection data into a U-Net backbone to boost the image quality of each phase-resolved image. Building on N-Net, the proposed CycN-Net also considers the temporal correlation among the phase-resolved images. Extensive experiments on both XCAT simulation data and real patient 4D CBCT datasets were carried out to verify the feasibility of the proposed CNNs. Compared with existing CNN models and two state-of-the-art iterative algorithms, both networks can effectively suppress streaking artifacts and noise while restoring distinct features. Moreover, the proposed method is robust in handling the complicated tasks posed by various patient datasets and imaging devices, which implies excellent generalization ability.


Subject(s)
Cone-Beam Computed Tomography , Spiral Cone-Beam Computed Tomography , Algorithms , Four-Dimensional Computed Tomography , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Phantoms, Imaging
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5428-5431, 2020 07.
Article in English | MEDLINE | ID: mdl-33019208

ABSTRACT

Deep learning-based radiomics has made great progress, as in CNN-based diagnosis and U-Net-based segmentation; however, the prediction of drug effectiveness with deep learning has been studied less. Choroidal neovascularization (CNV) and cystoid macular edema (CME) are diseases that often lead to a sudden onset but progressive decline in central vision, and treatment with anti-vascular endothelial growth factor (anti-VEGF) may not be effective for some patients. Predicting the effectiveness of anti-VEGF for each patient is therefore important. With the development of convolutional neural networks (CNNs) coupled with transfer learning, medical image classification has achieved great success. We used a transfer learning-based method to automatically predict the effectiveness of anti-VEGF from optical coherence tomography (OCT) images acquired before medication. The method consists of image preprocessing, data augmentation, and CNN-based transfer learning, and its prediction AUC can exceed 0.8. We also compared using lesion-region images against full OCT images on this task; experiments show that using the full OCT images yields better performance. Different deep neural networks, including AlexNet, VGG-16, GoogLeNet, and ResNet-50, were compared, and the modified ResNet-50 proved most suitable for predicting the effectiveness of anti-VEGF. Clinical Relevance - This prediction model can estimate whether anti-VEGF will be effective for patients with CNV or CME, which can help ophthalmologists make treatment plans.


Asunto(s)
Aprendizaje Profundo , Tomografía de Coherencia Óptica , Algoritmos , Bevacizumab , Humanos , Redes Neurales de la Computación
9.
IEEE Trans Med Imaging ; 39(10): 2996-3007, 2020 10.
Article in English | MEDLINE | ID: mdl-32217474

ABSTRACT

Photon-counting spectral computed tomography (CT) is capable of material characterization and can improve diagnostic performance over traditional clinical CT. However, it suffers from photon-count starvation in each individual energy channel, which may cause severe artifacts in the reconstructed images. Furthermore, since the images in different energy channels describe the same object, there are high correlations among channels. To make full use of the inter-channel correlations and minimize the count-starvation effect while maintaining clinically meaningful texture information, this paper combines a region-specific texture model with a low-rank correlation descriptor as a priori regularization to explore a superior texture-preserving Bayesian reconstruction of spectral CT. Specifically, the inter-channel correlations are characterized by a low-rank representation, and the intra-channel regional textures are modeled by a texture-preserving Markov random field; in other words, this paper integrates the spectral and spatial information into a unified Bayesian reconstruction framework. The widely used Split-Bregman algorithm is employed to minimize the objective function because the low-rank representation is non-differentiable. To evaluate the tissue-texture-preserving performance of the proposed method for each channel, three references are built for comparison: the traditional CT image from energy-integrating detection, spectral images from dual-energy CT, and individual channel images from a custom-made photon-counting spectral CT.
As expected, the proposed method produced promising results in terms of both preserving texture features and suppressing image noise in each channel, compared with the existing total variation (TV), low-rank TV, and tensor dictionary learning methods, by both visual inspection and the quantitative indexes of root mean square error, peak signal-to-noise ratio, structural similarity, and feature similarity.
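The low-rank subproblem that arises inside a Split-Bregman loop for a nuclear-norm prior has a closed-form proximal step: singular-value thresholding. A self-contained sketch on toy correlated "energy channels" (illustrative only, not the paper's full spectral model):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the nuclear
    norm, i.e. the low-rank update step inside a Split-Bregman iteration."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
# Stack of 4 highly correlated "energy channels" (rank-1 signal) plus
# small per-channel noise mimicking count starvation.
base = rng.random((64, 1))
channels = base @ np.ones((1, 4)) + 0.01 * rng.normal(size=(64, 4))

low_rank = svt(channels, tau=0.5)  # noise directions are shrunk away
```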


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Tomografía Computarizada por Rayos X , Algoritmos , Teorema de Bayes , Fantasmas de Imagen , Relación Señal-Ruido
10.
Sensors (Basel) ; 20(6)2020 Mar 16.
Article in English | MEDLINE | ID: mdl-32188068

ABSTRACT

Low-dose computed tomography (CT) has drawn much attention in the medical imaging field because of its ability to reduce the radiation dose. Recently, statistical iterative reconstruction (SIR) with a total variation (TV) penalty has been applied to low-dose CT image reconstruction. Nevertheless, the TV penalty has the drawback of creating blocky effects in the reconstructed images. To overcome these limitations of TV, in this paper we first introduce the structure tensor total variation (STV1) penalty into the SIR framework for low-dose CT image reconstruction. An accelerated fast iterative shrinkage thresholding algorithm (AFISTA) is then developed to minimize the objective function. The proposed AFISTA reconstruction algorithm was evaluated using numerically simulated low-dose projections based on two CT images and realistic low-dose projection data from a sheep lung CT perfusion study. The experimental results demonstrate that the proposed STV1-based algorithm outperforms FBP and the TV-based algorithm in removing noise and restraining blocky effects.
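AFISTA builds on the standard FISTA scheme. Below is a generic FISTA sketch for an ℓ1-regularized least-squares problem; the objective is a stand-in for illustration (the paper uses the STV1 penalty, whose proximal step differs):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=300):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)  # gradient + prox
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2          # momentum update
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]   # sparse ground truth
b = A @ x_true                            # noiseless measurements
x_hat = fista(A, b, lam=0.05)
```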


Asunto(s)
Encéfalo/diagnóstico por imagen , Pulmón/diagnóstico por imagen , Interpretación de Imagen Radiográfica Asistida por Computador/métodos , Tomografía Computarizada por Rayos X/métodos , Algoritmos , Animales , Humanos , Fantasmas de Imagen , Dosis de Radiación , Ovinos
11.
Med Phys ; 47(5): 2099-2115, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32017128

ABSTRACT

PURPOSE: Four-dimensional cone-beam computed tomography (4D CBCT) has been developed to provide a sequence of phase-resolved reconstructions in image-guided radiation therapy. However, 4D CBCT images are degraded by severe streaking artifacts because the 4D CBCT reconstruction process is an extreme sparse-view CT procedure wherein only under-sampled projections are used for the reconstruction of each phase. To obtain a set of 4D CBCT images achieving both high spatial and temporal resolution, we propose an algorithm that provides a high-quality initial image at the beginning of the iterative reconstruction process for each phase to guide the final reconstructed result toward its optimal solution. METHODS: The proposed method consists of three steps to generate the initial image. First, a prior image is obtained by an iterative reconstruction method using the measured projections of the entire set of 4D CBCT images. The prior image clearly shows the appearance of structures in static regions, although it contains blurring artifacts in motion regions. Second, the robust principal component analysis (RPCA) model is adopted to extract the motion components corresponding to each phase-resolved image. Third, a set of initial images is produced by the proposed linear estimation model, which combines the prior image and the RPCA-decomposed motion components. The final 4D CBCT images are derived from the simultaneous algebraic reconstruction technique (SART) equipped with the initial images. Qualitative and quantitative evaluations were performed using two extended cardiac-torso (XCAT) phantoms and two sets of patient data. Several state-of-the-art 4D CBCT algorithms were run for comparison to validate the performance of the proposed method. RESULTS: The image quality of the phase-resolved images is greatly improved by the proposed method in both phantom and patient studies.
The results show outstanding spatial resolution, in which streaking artifacts are suppressed to a large extent while detailed structures such as tumors and blood vessels are well restored, and high temporal resolution, with distinct respiratory motion changes across phases. Quantitative evaluations of the simulation data indicate that average decreases in root-mean-square error (RMSE) of 36.72% at the EI phase and 42% at the EE phase are achieved by our method compared with the PICCS algorithm on Phantom 1 and Phantom 2. In addition, the proposed method has the lowest entropy and the highest normalized mutual information in the simulation experiments compared with existing methods such as PRI, RPCA-4DCT, SMART, and PICCS, and it also achieves the lowest entropy value in the real patient cases compared with the competing methods. CONCLUSIONS: The proposed algorithm can generate an optimal initial image to improve iterative reconstruction performance. The final sequence of phase-resolved volumes guided by the initial image achieves high spatiotemporal resolution by eliminating motion-induced artifacts. This study presents a practical 4D CBCT reconstruction method with leading image quality.
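The benefit of a good initial image for SART can be seen on a toy linear system: the same SART iteration started from a better initial estimate reaches a smaller residual in the same number of sweeps. This is a dense toy system, not a CBCT geometry:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((30, 20)) + 0.1    # toy nonnegative system matrix
x_true = rng.random(20)
b = A @ x_true                     # consistent "projections"

def sart(x0, n_iter, relax=1.0):
    """Simultaneous algebraic reconstruction technique (SART)."""
    row_sums = A.sum(axis=1)       # per-ray normalization
    col_sums = A.sum(axis=0)       # per-pixel normalization
    x = x0.copy()
    for _ in range(n_iter):
        x = x + relax * (A.T @ ((b - A @ x) / row_sums)) / col_sums
    return x

# Cold start (zeros) vs. warm start near the solution, same sweep count.
res_cold = np.linalg.norm(b - A @ sart(np.zeros(20), 50))
res_warm = np.linalg.norm(b - A @ sart(x_true + 0.01 * rng.normal(size=20), 50))
```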


Asunto(s)
Tomografía Computarizada de Haz Cónico , Tomografía Computarizada Cuatridimensional , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos , Humanos , Análisis de Componente Principal , Control de Calidad
12.
IEEE Trans Med Imaging ; 39(1): 246-258, 2020 01.
Article in English | MEDLINE | ID: mdl-31251178

ABSTRACT

The X-ray spectrum plays a very important role in dual-energy computed tomography (DECT) reconstruction. Because it is difficult to measure the X-ray spectrum directly in practice, efforts have been devoted to spectrum estimation using transmission measurements. These measurement methods are independent of the image reconstruction, which brings extra cost and is time-consuming; furthermore, a mismatch in the estimated spectrum degrades the quality of the reconstructed images. In this paper, we propose a spectrum estimation-guided iterative reconstruction algorithm for DECT that aims to simultaneously recover the spectrum and reconstruct the image. The proposed algorithm is formulated as an optimization framework combining spectrum estimation based on model-spectra representation, image reconstruction, and regularization for noise suppression. To resolve the multi-variable optimization problem of simultaneously obtaining the spectra and images, we introduce the block coordinate descent (BCD) method into the optimization iteration. Both numerical simulations and physical phantom experiments were performed to verify and evaluate the proposed method. The experimental results validate the accuracy of the estimated spectra and reconstructed images under different noise levels. The proposed method obtains better image quality than reconstructions from the known exact spectra and is robust in noisy-data applications.
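Block coordinate descent alternates exact minimizations over one variable block at a time. A toy version of such a joint problem — a scalar "spectrum" coefficient s and an image vector x fitted to measurements b ≈ Ax + s·h — converges to the joint least-squares solution. The model and names here are illustrative and far simpler than the paper's DECT formulation:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 10))     # toy forward operator (image block)
h = rng.normal(size=40)           # toy spectrum basis vector (spectrum block)
x_true = rng.normal(size=10)
s_true = 2.5
b = A @ x_true + s_true * h       # noiseless measurements

# Block coordinate descent: alternate exact solves for x and s.
x = np.zeros(10)
s = 0.0
for _ in range(100):
    x, *_ = np.linalg.lstsq(A, b - s * h, rcond=None)  # x-block update
    r = b - A @ x
    s = float(h @ r / (h @ h))                          # s-block update
```

Each update is the exact minimizer of the shared quadratic objective over its block, so the objective is non-increasing at every step.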


Asunto(s)
Procesamiento de Imagen Asistido por Computador/métodos , Tomografía Computarizada por Rayos X/métodos , Abdomen/diagnóstico por imagen , Algoritmos , Simulación por Computador , Humanos , Modelos Biológicos , Fantasmas de Imagen
13.
Article in English | MEDLINE | ID: mdl-31794396

ABSTRACT

Previous research on no-reference (NR) quality assessment of multiply-distorted images focused mainly on three distortion types (white noise, Gaussian blur, and JPEG compression), while in practice images can be contaminated by many other common distortions arising at the various stages of processing. Although MUSIQUE (MUltiply- and Singly-distorted Image QUality Estimator) [Zhang et al., TIP 2018] is a successful NR algorithm, it is still limited to the three distortion types. In this paper, we extend MUSIQUE to MUSIQUE-II to blindly assess the quality of images corrupted by five distortion types (white noise, Gaussian blur, JPEG compression, JPEG2000 compression, and contrast change) and their combinations. The proposed MUSIQUE-II algorithm builds upon the classification and parameter-estimation framework of its predecessor by using more advanced models and a more comprehensive set of distortion-sensitive features. Specifically, MUSIQUE-II relies on a three-layer classification model to identify 19 distortion types. To predict the five distortion parameter values, MUSIQUE-II extracts an additional 14 contrast features and employs a multi-layer probability-weighting rule. Finally, MUSIQUE-II employs a new most-apparent-distortion strategy to adaptively combine five quality scores based on the outputs of the three classification models. Experimental results on three multiply-distorted and six singly-distorted image quality databases show that MUSIQUE-II yields not only a substantial improvement in quality-prediction performance compared with its predecessor, but also highly competitive performance relative to other state-of-the-art FR/NR IQA algorithms.

14.
Phys Med Biol ; 63(21): 215008, 2018 10 24.
Article in English | MEDLINE | ID: mdl-30277889

ABSTRACT

Genetic studies have identified associations between gene mutations and clear cell renal cell carcinoma (ccRCC). Since the complete gene mutational landscape cannot be characterized through biopsy and sequencing assays for each patient, non-invasive tools are needed to determine the mutation status of tumors. Radiogenomics may be an attractive alternative, identifying disease genomics by analyzing large numbers of features extracted from medical images. Most current radiogenomics predictive models are built on a single classifier and trained with a single objective. However, since many classifiers are available, selecting an optimal model is challenging; moreover, a single objective may not be a good measure to guide model training. We propose a new multi-classifier multi-objective (MCMO) radiogenomics predictive model. To obtain more reliable prediction results, similarity-based sensitivity and specificity were defined and used as the two objective functions during training. To take advantage of different classifiers, the evidential reasoning (ER) approach was used to fuse the output of each classifier. Additionally, a new similarity-based multi-objective optimization algorithm (SMO) was developed to train the MCMO to predict ccRCC-related gene mutations (VHL, PBRM1, and BAP1) using quantitative CT features. Using the proposed MCMO model, we achieved a predictive area under the receiver operating characteristic curve (AUC) over 0.85 for the VHL, PBRM1, and BAP1 genes with balanced sensitivity and specificity. Furthermore, MCMO outperformed all the individual classifiers and yielded more reliable results than other optimization algorithms and commonly used fusion strategies.
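The idea of fusing several classifiers' outputs can be sketched with simple log-odds pooling — a much-simplified stand-in for the evidential reasoning approach, with hypothetical classifier scores and weights:

```python
import numpy as np

def fuse(probs, weights):
    """Weighted log-odds pooling of per-classifier probabilities.
    A simplified stand-in for evidential-reasoning fusion: the fused
    log-odds is a convex combination of the individual log-odds."""
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    logit = np.sum(weights * np.log(probs / (1.0 - probs)))
    return 1.0 / (1.0 + np.exp(-logit))

# Three hypothetical classifiers scoring one tumor for a VHL mutation.
p = fuse([0.80, 0.65, 0.90], weights=[0.5, 0.2, 0.3])
```

Because the fused log-odds is a convex combination, the fused probability always lies between the most pessimistic and most optimistic classifier.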


Asunto(s)
Carcinoma de Células Renales/genética , Carcinoma de Células Renales/radioterapia , Neoplasias Renales/genética , Neoplasias Renales/radioterapia , Modelos Estadísticos , Mutación , Adulto , Anciano , Anciano de 80 o más Años , Algoritmos , Carcinoma de Células Renales/diagnóstico por imagen , Carcinoma de Células Renales/patología , Humanos , Neoplasias Renales/diagnóstico por imagen , Neoplasias Renales/patología , Persona de Mediana Edad , Pronóstico , Tomografía Computarizada por Rayos X
15.
Article in English | MEDLINE | ID: mdl-29994707

ABSTRACT

The recent popularity of remote desktop software and live streaming of composited video has given rise to a growing number of applications that make use of so-called screen-content images, which contain a mixture of text, graphics, and photographic imagery. Automatic quality assessment (QA) of screen-content images is necessary to enable tasks such as quality monitoring, parameter adaptation, and other optimizations. Although QA of natural images has been heavily researched over the last several decades, QA of screen-content images is a relatively new topic. In this paper, we present a QA algorithm, the convolutional neural network (CNN) based screen-content image quality estimator (CNN-SQE), which operates via a fuzzy classification of screen-content images into plain-text, computer-graphics/cartoon, and natural-image regions. The first two classes are considered to contain synthetic content (text/graphics), and the latter two are considered to contain naturalistic content (graphics/photographs); the overlap of the classes allows the computer-graphics/cartoon segments to be analyzed by both text-based and natural-image-based features. We present a CNN-based approach for the classification, an edge-structure-based quality degradation model, and a region-size-adaptive quality-fusion strategy. As we demonstrate, the proposed CNN-SQE algorithm achieves better or competitive performance compared with other state-of-the-art QA algorithms.

16.
Phys Med Biol ; 63(13): 135015, 2018 07 02.
Article in English | MEDLINE | ID: mdl-29863486

ABSTRACT

In computed tomography (CT), the polychromatic nature of the x-ray photons, which are emitted from a source, interact with materials, and are absorbed by a detector, may lead to the beam-hardening effect in signal detection and image formation, especially when materials of high attenuation (e.g. bone or metal implants) lie in the x-ray beam. Usually, a beam-hardening correction (BHC) method is used to suppress the artifacts induced by bone or other objects of high attenuation, in which a calibration-oriented iterative operation is carried out to determine a set of parameters covering all situations. Based on the Helgason-Ludwig consistency condition (HLCC), an optimization-based method has been proposed that turns the calibration-oriented iterative operation of BHC into an optimization problem sustained by projection data. However, the optimization-based HLCC-BHC method demands a large number of neighboring projection views acquired at a relatively high and uniform angular sampling rate, hindering its application when the angular sampling of projection views is sparse or non-uniform. By defining an objective function based on the data integral invariant constraint (DIIC), we again turn BHC into an optimization problem sustained by projection data. As it only needs a pair of projection views at any view angle, the proposed BHC method remains applicable in the challenging scenarios mentioned above. Using computer-simulated projection data, we evaluate and verify the performance of the proposed optimization-based DIIC-BHC method. Moreover, with projection data from a head scan on a multi-detector row CT (MDCT) scanner, we show the DIIC-BHC method's utility in clinical applications.
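As a toy illustration of the DIIC idea — that for ideal monochromatic data the total projection integral is invariant across views — one might fit a quadratic beam-hardening correction coefficient by minimizing the variance of the per-view sums. The quadratic correction model `p -> p + a*p**2` and the grid search are assumptions for illustration only, not the paper's formulation.

```python
import numpy as np

def diic_objective(a, views):
    """Variance of per-view projection sums after applying the
    (assumed) quadratic correction p -> p + a*p**2.

    For consistent, hardening-free data this variance would be zero.
    """
    sums = [np.sum(p + a * p**2) for p in views]
    return float(np.var(sums))

def fit_bhc_coefficient(views, grid=None):
    """Grid-search the correction coefficient that best restores
    the data integral invariance across views."""
    if grid is None:
        grid = np.linspace(-0.5, 0.5, 201)
    costs = [diic_objective(a, views) for a in grid]
    return float(grid[int(np.argmin(costs))])
```

A real implementation would use a physically motivated correction model and a proper optimizer, but the structure — an objective sustained purely by projection data — is the same.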


Subject(s)
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Artifacts , Calibration , Humans , Image Processing, Computer-Assisted/standards , Phantoms, Imaging , Tomography, X-Ray Computed/standards
17.
IEEE Trans Med Imaging ; 37(6): 1348-1357, 2018 06.
Article in English | MEDLINE | ID: mdl-29870364

ABSTRACT

The continuous development and extensive use of computed tomography (CT) in medical practice has raised public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists' judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim to minimize the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of optimal transport theory and promises to improve the performance of GANs. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses on statistically migrating the data noise distribution from strong to weak. Therefore, our proposed method transfers knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also preserving critical structural information. Promising results have been obtained in our experiments with clinical CT images.
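The combined objective described above can be sketched minimally: the generator's adversarial term comes from Wasserstein critic scores, mixed with a feature-space (perceptual) MSE. This is not the paper's implementation — the weighting `lam` and all function names are hypothetical, and a real system would obtain critic scores and features from trained networks rather than raw arrays.

```python
import numpy as np

def wasserstein_loss(d_real, d_fake):
    """Critic loss: the critic maximizes E[D(real)] - E[D(fake)],
    i.e. minimizes E[D(fake)] - E[D(real)]."""
    return float(np.mean(d_fake) - np.mean(d_real))

def perceptual_loss(feat_denoised, feat_truth):
    """MSE between feature-space representations (e.g. from a
    pretrained network) rather than between raw pixels."""
    return float(np.mean((feat_denoised - feat_truth) ** 2))

def generator_objective(d_fake, feat_denoised, feat_truth, lam=0.1):
    """Generator minimizes -E[D(fake)] plus a weighted perceptual term
    (`lam` is a hypothetical trade-off weight)."""
    return float(-np.mean(d_fake) + lam * perceptual_loss(feat_denoised, feat_truth))
```

The perceptual term anchors the denoised output to the ground truth's structure, while the adversarial term pushes its noise statistics toward those of clean images.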


Subject(s)
Radiation Dosage , Signal Processing, Computer-Assisted , Tomography, X-Ray Computed/methods , Algorithms , Artifacts , Deep Learning , Humans , Image Processing, Computer-Assisted
18.
J Xray Sci Technol ; 26(4): 603-622, 2018.
Article in English | MEDLINE | ID: mdl-29689766

ABSTRACT

Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with an l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion scan and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to that of the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
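The l1-penalized sparse coding subproblem mentioned above is classically solved with soft thresholding; a generic ISTA sketch follows. This is a standard solver for the subproblem type, not the paper's exact algorithm; `D` (dictionary) and `patch` are hypothetical names.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_ista(D, patch, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||D a - patch||^2 + lam*||a||_1 by ISTA:
    a gradient step on the quadratic term followed by soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - patch)
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

Unlike the l0 case (greedy, non-convex), this l1 subproblem is convex, which is what makes the global convergence argument for the alternating scheme possible.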


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Tomography, X-Ray Computed/methods , Animals , Humans , Lung/diagnostic imaging , Phantoms, Imaging , Sheep
19.
IEEE Trans Med Imaging ; 36(12): 2466-2478, 2017 12.
Article in English | MEDLINE | ID: mdl-28981411

ABSTRACT

Despite the rapid development of X-ray cone-beam CT (CBCT), image noise still remains a major issue for low-dose CBCT. To suppress noise effectively while retaining structures well in low-dose CBCT images, in this paper a sparse constraint based on a 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate the 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiencies in representing volumetric images. Then, multiple real-data experiments are conducted for performance validation. Based on these results, we found: 1) the 3-D dictionary-based sparse coefficients follow a Laplacian distribution roughly three orders of magnitude narrower than that of the 2-D dictionary, suggesting the higher representation efficiency of the 3-D dictionary; 2) the sparsity level curve demonstrates a clear Z-shape and is hence referred to as the Z-curve in this paper; 3) the parameter associated with the maximum-curvature point of the Z-curve provides a good parameter choice, which can be adaptively located with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method delivers reconstructions with the lowest root mean squared errors and the highest structural similarity index compared with the competing methods; 5) noise performance similar to that of the regular-dose FDK reconstruction, with respect to the standard deviation metric, can be achieved with the proposed method using (1/2)/(1/4)/(1/8) dose-level projections. The contrast-to-noise ratio is improved by roughly 2.5/3.5 times, for two different cases, at the (1/8) dose level compared with the low-dose FDK reconstruction. The proposed method is thus expected to reduce the radiation dose by a factor of 8 for CBCT, given that low-contrast tissues remained strongly discriminable in the reader evaluations.
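The maximum-curvature idea behind locating the knee of the Z-curve can be sketched generically: given the curve as sampled points, compute the finite-difference curvature and take its argmax. This is a standard knee-finding heuristic offered as an illustration, not the paper's exact ZIP procedure.

```python
import numpy as np

def max_curvature_index(x, y):
    """Index of the maximum-curvature point of a sampled curve,
    using kappa = |y''| / (1 + y'^2)^(3/2) via finite differences."""
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    kappa = np.abs(d2y) / (1.0 + dy**2) ** 1.5
    return int(np.argmax(kappa))
```

Applied to a curve with a sharp knee (e.g. a piecewise-linear ramp that flattens), the returned index sits at the bend, which is the behavior one wants when picking the regularization parameter at the Z-curve's corner.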


Subject(s)
Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Databases, Factual , Female , Head/diagnostic imaging , Humans , Male , Prostate/diagnostic imaging
20.
J Xray Sci Technol ; 25(6): 907-926, 2017.
Article in English | MEDLINE | ID: mdl-28697578

ABSTRACT

BACKGROUND: In regularized iterative reconstruction algorithms, the selection of the regularization parameter depends on the noise level of the cone beam projection data. OBJECTIVE: Our aim is to propose an algorithm to estimate the noise level of cone beam projection data. METHODS: We first derived the data correlation of cone beam projection data in the Fourier domain, based on which the signal and the noise were decoupled. The noise was then extracted and averaged for estimation. An adaptive regularization parameter selection strategy was introduced based on the estimated noise level. Simulation and real data studies were conducted for performance validation. RESULTS: There exists an approximately zero-energy double-wedge area in the 3D Fourier domain of cone beam projection data. The averaged relative errors of the proposed noise level estimation algorithm in the analytical/MC/spotlight-mode simulation experiments were 0.8%, 0.14% and 0.24%, respectively, outperforming the homogeneous-area-based and transformation-based algorithms. Real data studies indicated that the estimated noise levels were inversely proportional to the exposure levels; i.e., the slopes in the log-log plot were -1.0197 and -1.049 for the short-scan and half-fan modes, respectively. The introduced regularization parameter selection strategy delivered promising reconstructed image quality. CONCLUSIONS: Based on the data correlation of cone beam projection data in the Fourier domain, the proposed algorithm can estimate the noise level of cone beam projection data accurately and robustly. The estimated noise level can be used to adaptively select the regularization parameter.
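The core trick — reading the noise level off a Fourier region where the clean signal is (approximately) zero, such as the double-wedge area — can be illustrated with a simplified white-noise sketch. The boolean `mask` standing in for the double-wedge region and the white-noise assumption are simplifications, not the paper's exact estimator.

```python
import numpy as np

def estimate_noise_sigma(proj, mask):
    """Estimate the standard deviation of additive white noise in a
    projection array from Fourier coefficients inside a near-zero-signal
    region (`mask` is True where the clean spectrum vanishes).

    For white noise and NumPy's unnormalized FFT, E|F|^2 = N * sigma^2,
    so averaging |F|^2 over the masked region and dividing by N
    recovers sigma^2.
    """
    F = np.fft.fftn(proj)
    power = np.mean(np.abs(F[mask]) ** 2)
    return float(np.sqrt(power / proj.size))
```

In the actual method the mask would be the derived double-wedge support in the 3D Fourier domain of the projection stack; here, any region known to carry no signal energy serves the same purpose.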


Subject(s)
Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Humans , Phantoms, Imaging , Scattering, Radiation